Assistant Professor | School of Information Science
Only 4 weeks left! 4 groups have been approved by IRB…
But EVERYONE should be ready to launch on SONA or have already launched
If you get 100 people, start analyzing (but keep collecting)
Groups should be working on the front end (literature review, hypotheses, methods)
Structural equation modeling (SEM) is the root of many, many advanced techniques applying the general linear model.
SEM involves building a MODEL or MODELS of how variables connect
Dependent or mediating variables (variables which are both effects of other exogenous or mediating variables and causes of other mediating and dependent variables)
SEM typically proceeds in two steps:
1. Confirmatory Factor Analysis (establishing the measurement model)
2. The full structural model: Measurement Model + Latent Variables + Paths and Covariances connecting them
The purpose is to establish that the structural model has a GOOD FIT to the DATA, and that one model FITS the data BETTER than another model.
For fit criteria, Kline (2016) and Hu and Bentler (1999) are the most common sources.
Notable Scholars: Michael Stephenson, Lance Holbert, Rex Kline, Alan Goodboy
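To make the two-step idea concrete, here is a minimal sketch in Python using the semopy package (an assumption for this example; lavaan in R is the field standard). The latent variables, indicators, and data file are all hypothetical.

```python
# A minimal two-step SEM sketch: measurement model first, then a structural path.
import pandas as pd
import semopy

# lavaan-style syntax: "=~" defines the measurement model (latent variables
# and their indicators); "~" defines a structural path between latents.
desc = """
motivation =~ m1 + m2 + m3
learning   =~ l1 + l2 + l3
learning ~ motivation
"""

data = pd.read_csv("class_survey.csv")  # hypothetical indicator data

model = semopy.Model(desc)
model.fit(data)

print(model.inspect())           # loading and path estimates
print(semopy.calc_stats(model))  # fit indices (chi-square, CFI, TLI, RMSEA, ...)
```

Comparing the fit statistics of two competing models is how you argue that one model FITS the data BETTER than another.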
For a meta-analysis, you are taking the effect sizes from all studies on a subject and combining them to understand effects ACROSS studies (Noar, 2006).
Not concerned with statistical significance (well, not really).
Notable scholars: Mike Allen, Seth Noar
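As a sketch of what "combining effect sizes" means, here is a minimal fixed-effect (inverse-variance weighted) pooling in Python; the four effect sizes and variances are made-up numbers, not real studies.

```python
# Fixed-effect meta-analysis: weight each study's effect size (Cohen's d here)
# by the inverse of its sampling variance, then average.
import numpy as np

d = np.array([0.31, 0.45, 0.12, 0.50])  # hypothetical effect sizes from 4 studies
v = np.array([0.02, 0.05, 0.01, 0.04])  # their sampling variances

w = 1.0 / v                           # inverse-variance weights
d_pooled = np.sum(w * d) / np.sum(w)  # the effect ACROSS studies
se_pooled = np.sqrt(1.0 / np.sum(w))  # standard error of the pooled effect

print(f"pooled d = {d_pooled:.3f} (SE = {se_pooled:.3f})")
```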
One of the assumptions of statistical inference is that observations are independent of one another.
If not, then all responses share a bit of error that biases results.
Consider a classroom. If I asked students to fill out a survey about their instructor, can I assume responses are totally independent?
No! They are reporting on the same instructor.
Hierarchical Linear Modeling (HLM) runs a bunch of regressions at the same time, at multiple levels.
It nests individuals within larger groups (levels) to statistically control for a lack of independent observations.
Notable scholars: Kody Frey (Maybe not notable, but I can do it!), Hee Sun Park, Andrew Hayes (Read him for ANY method, really), Betsy McCoach
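Here is a minimal sketch of the classroom example in Python using statsmodels' MixedLM; the data file and column names are hypothetical.

```python
# Students (level 1) nested in classrooms (level 2): a random intercept per
# classroom absorbs the error that classmates share, restoring (conditional)
# independence of the residuals.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("instructor_survey.csv")  # hypothetical survey data

model = smf.mixedlm("rating ~ immediacy",      # level-1 regression
                    data,
                    groups=data["classroom"])  # level-2 nesting variable
result = model.fit()
print(result.summary())
```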
Bayesian statistics involves using your prior beliefs, also called priors, to reason about everyday problems, and continuously updating these beliefs with the data you gather through experience.
Let’s assume you live in a big city and are shopping, and you momentarily see a very famous person. Let’s call him X.
Now you come back home wondering if the person you saw was really X.
Let’s say you want to assign a probability to this.
Since you live in a big city, you would think that coming across this person has a very low probability, and you assign it a value of 0.004.
Mathematically, we can write this as: P(seeing person X | personal experience) = 0.004
The next day, since you are following this person X on social media, you come across his post with him posing right in front of the same store. You are now almost convinced that you saw the same person.
You assign a probability of seeing this person as 0.85.
Mathematically, we can write this as: P(seeing person X | personal experience, social media post) = 0.85
You want to be convinced that you saw this person. So, you start looking for other outlets of the same shop.
You find 3 other outlets in the city. Now you are less convinced that you saw this person, so you update the probability to 0.36.
Mathematically, we can write this as: P(seeing person X | personal experience, social media post, outlet search) = 0.36
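Formally, each of these numbers is a posterior probability from Bayes' theorem, stated here in general form (the example assigns the values directly rather than computing them):

```latex
P(\text{saw } X \mid \text{evidence}) =
  \frac{P(\text{evidence} \mid \text{saw } X)\, P(\text{saw } X)}{P(\text{evidence})}
```

Each new piece of evidence (the social media post, the outlet search) enters the conditioning set, moving the posterior from 0.004 to 0.85 to 0.36.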
Let’s say you want to predict the bias present in a 6-faced die that is not fair.
One way to do this would be to toss the die n times and find the proportion of each face; this is commonly called the frequentist approach. Another way is to look at the surface of the die to understand how the probability could be distributed. Say you find a curved surface on one edge and a flat surface on another edge; then you could give more probability to the faces near the flat edges, as the die is more likely to stop rolling at those edges. This is the Bayesian approach.
Source: https://medium.com/@shankyp1000/bayesian-statistics-explained-in-simple-terms-with-examples-5200a32d62f8
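Here is a minimal sketch of both approaches in Python; the toss counts and the prior weights on the "flat edge" faces are made-up numbers for illustration.

```python
# Frequentist vs. Bayesian estimates of an unfair die's face probabilities.
import numpy as np

counts = np.array([4, 6, 5, 9, 12, 14])  # hypothetical results of 50 tosses

# Frequentist: each face's probability is just its observed proportion.
freq_est = counts / counts.sum()

# Bayesian: encode the surface inspection as a Dirichlet prior that puts
# extra pseudo-counts on faces 5 and 6 (assumed to sit near the flat edge),
# then update with the observed counts; the posterior mean blends the two.
prior = np.array([1.0, 1.0, 1.0, 1.0, 3.0, 3.0])
posterior = prior + counts
bayes_est = posterior / posterior.sum()

print("frequentist:", np.round(freq_est, 3))
print("bayesian:   ", np.round(bayes_est, 3))
```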
I have no idea. I’m sure the Bayesian Comm Scholars exist somewhere…
Maybe Andy Pilny??
Growth curve modeling is a technique to describe and explain an individual’s change over time (Curran et al., 2010).
Hence the focus on GROWTH
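A minimal sketch in Python: the simplest growth curve is a linear mixed model in which each individual gets their own intercept and slope over time. statsmodels' MixedLM and the column names are assumptions for illustration.

```python
# Linear growth curve: repeated measures nested within individuals, with a
# random intercept AND a random slope for time, so each person has their own
# starting point and trajectory of change.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("longitudinal.csv")  # hypothetical: one row per person per wave

model = smf.mixedlm("score ~ time",      # average (fixed) growth trajectory
                    data,
                    groups=data["id"],   # observations nested within individuals
                    re_formula="~time")  # person-specific intercepts and slopes
result = model.fit()
print(result.summary())
```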
Everything stems from group differences and associations!
Complex does not equal accurate.
It’s hard to keep up when statistical methods change and develop so frequently.
From Goodboy and Kline (2017):
“Box (1976) put it like this: Overelaboration and overparameterization—that is, unnecessary complexity—is often the hallmark of scientific mediocrity.”